Results 1 - 20 of 145
2.
Med J Aust ; 207(5): 201-205, 2017 Aug 04.
Article in English | MEDLINE | ID: mdl-28987133

ABSTRACT

OBJECTIVE: To evaluate hospital length of stay (LOS) and admission rates before and after implementation of an evidence-based, accelerated diagnostic protocol (ADP) for patients presenting to emergency departments (EDs) with chest pain. DESIGN: Quasi-experimental design, with interrupted time series analysis for the period October 2013 - November 2015. SETTING, PARTICIPANTS: Adults presenting with chest pain to the EDs of 16 public hospitals in Queensland. INTERVENTION: Implementation of the ADP by structured clinical redesign. MAIN OUTCOME MEASURES: Primary outcome: hospital LOS; secondary outcomes: ED LOS, hospital admission rate, and the proportion of patients identified as being at low risk of an acute coronary syndrome (ACS). RESULTS: Outcomes were recorded for 30 769 patients presenting before and 23 699 presenting after implementation of the ADP. Following implementation, 21.3% of patients were identified by the ADP as being at low risk for an ACS. Mean hospital LOS fell from 57.7 to 47.3 hours (rate ratio [RR], 0.82; 95% CI, 0.74-0.91), and mean ED LOS for all patients presenting with chest pain fell from 292 to 256 minutes (RR, 0.80; 95% CI, 0.72-0.89). The hospital admission rate fell from 68.3% (95% CI, 59.3-78.5%) to 54.9% (95% CI, 44.7-67.6%; P < 0.01). The estimated release of financial capacity amounted to $2.3 million from reduced ED LOS and $11.2 million from fewer hospital admissions. CONCLUSIONS: Implementing an evidence-based ADP for assessing patients with chest pain was feasible across a range of hospital types, and achieved a substantial release of health service capacity through reductions in hospital admissions and ED LOS.
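The pre/post design above can be sketched as a minimal level-change (interrupted time series) regression. The monthly series below is synthetic: the pre/post means, the noise level and the rollout month are assumptions chosen only to echo the abstract's figures, not the study's Queensland data (which was modelled with rate ratios).

```python
import numpy as np

# Synthetic monthly mean hospital LOS (hours) over 26 months,
# with a step change at ADP rollout (assumed here at month 13).
rng = np.random.default_rng(0)
months = np.arange(26)
post = (months >= 13).astype(float)       # 1 after implementation
los = 57.7 - 10.4 * post + rng.normal(0, 1.0, months.size)

# Ordinary least squares: intercept + post-implementation level change
X = np.column_stack([np.ones_like(post), post])
(pre_mean, step), *_ = np.linalg.lstsq(X, los, rcond=None)
print(f"pre-ADP mean LOS ~ {pre_mean:.1f} h, level change ~ {step:.1f} h")
```

A full interrupted time series model would also include a secular trend term and, for rate outcomes like those in the study, a log link.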


Subject(s)
Acute Coronary Syndrome/diagnosis, Chest Pain/diagnosis, Clinical Protocols/standards, Hospitalization/statistics & numerical data, Length of Stay/statistics & numerical data, Risk Assessment/methods, Adult, Aged, Emergency Service, Hospital, Evidence-Based Practice, Female, Hospitalization/economics, Humans, Length of Stay/economics, Male, Middle Aged, Queensland/epidemiology, Risk Assessment/classification
3.
BMC Med Inform Decis Mak ; 17(1): 35, 2017 04 08.
Article in English | MEDLINE | ID: mdl-28390405

ABSTRACT

BACKGROUND: An accurate risk stratification tool is critical for identifying patients at high risk of frequent hospital readmissions. While 30-day hospital readmissions have been widely studied, there is increasing interest in identifying potential high-cost users or frequent hospital admitters. In this study, we aimed to derive and validate a risk stratification tool to predict frequent hospital admitters. METHODS: We conducted a retrospective cohort study using readily available clinical and administrative data from the electronic health records of a tertiary hospital in Singapore. The primary outcome was three or more inpatient readmissions within 12 months of index discharge. We used univariable and multivariable logistic regression models to build a frequent hospital admission risk score (FAM-FACE-SG) incorporating demographics, indicators of socioeconomic status, prior healthcare utilization, markers of acute illness burden and markers of chronic illness burden. We further validated the risk score on a separate dataset and compared its performance with the LACE index using receiver operating characteristic analysis. RESULTS: Our study included 25,244 patients, with 70% randomly selected for risk score derivation and the remaining 30% for validation. Overall, 4,322 patients (17.1%) met the outcome. The final FAM-FACE-SG score consisted of nine components: Furosemide (intravenous 40 mg or more during index admission); Admissions in the past year; Medifund (required financial assistance); Frequent emergency department (ED) use (≥3 ED visits in the 6 months before index admission); Anti-depressants in the past year; Charlson comorbidity index; End-stage renal failure on dialysis; Subsidized ward stay; and Geriatric status.
In validation, the FAM-FACE-SG score had good discriminative ability, with an area under the curve (AUC) of 0.839 (95% confidence interval [CI]: 0.825-0.853) for prediction of frequent hospital admission. In comparison, the LACE index achieved an AUC of only 0.761 (95% CI: 0.745-0.777). CONCLUSIONS: The FAM-FACE-SG score shows strong potential for implementation to provide near real-time prediction of frequent admissions. It may serve as a first step in identifying high-risk patients for resource-intensive interventions.
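The AUC comparison reported above can be reproduced in miniature. The sketch below hand-codes the AUC as a Mann-Whitney statistic on synthetic scores; the separation parameters are invented stand-ins for the two models, not fitted to the study's data.

```python
import numpy as np

def auc(y_true, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a random event outranks a random non-event."""
    y_true = np.asarray(y_true, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[y_true], scores[~y_true]
    correct = (pos[:, None] > neg[None, :]).mean()   # correctly ordered pairs
    ties = (pos[:, None] == neg[None, :]).mean()     # ties count half
    return correct + 0.5 * ties

# Synthetic cohort: ~17% frequent admitters, as in the study; a stronger
# score (standing in for FAM-FACE-SG) vs a weaker one (standing in for LACE).
rng = np.random.default_rng(1)
y = rng.random(2000) < 0.17
strong = 1.2 * y + rng.normal(0, 1, y.size)   # larger separation -> higher AUC
weak = 0.6 * y + rng.normal(0, 1, y.size)
print(f"strong AUC ~ {auc(y, strong):.2f}, weak AUC ~ {auc(y, weak):.2f}")
```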


Subject(s)
Electronic Health Records/statistics & numerical data, Patient Readmission/statistics & numerical data, Risk Assessment/statistics & numerical data, Tertiary Care Centers/statistics & numerical data, Adult, Aged, Female, Humans, Male, Middle Aged, Retrospective Studies, Risk Assessment/classification, Singapore
4.
Biomarkers ; 22(3-4): 189-199, 2017.
Article in English | MEDLINE | ID: mdl-27299923

ABSTRACT

Precise estimation of the absolute risk of CVD events is necessary when making treatment recommendations for patients. A number of multivariate risk models have been developed to estimate cardiovascular risk in asymptomatic individuals based on the assessment of multiple variables. Owing to the inherent limitations of risk models, several novel risk markers, including serum biomarkers, have been studied in an attempt to improve cardiovascular risk prediction above and beyond the established risk factors. In this review, we discuss the role of underappreciated biomarkers such as red cell distribution width (RDW), cystatin C (cysC) and homocysteine (Hcy), as well as imaging biomarkers, in cardiovascular risk reclassification, and highlight their utility as an additional source of information in patients at intermediate risk.


Subject(s)
Biomarkers/blood, Cardiovascular Diseases/diagnosis, Risk Assessment/classification, Cardiovascular Diseases/diagnostic imaging, Cystatin C/blood, Erythrocyte Indices, Female, Homocysteine/blood, Humans, Male, Risk Assessment/methods
5.
Health Serv Res ; 52(4): 1277-1296, 2017 08.
Article in English | MEDLINE | ID: mdl-27714791

ABSTRACT

OBJECTIVE: There is increasing interest in identifying high-quality physicians, for example by determining whether a physician performs above or below a threshold level. To evaluate whether current methods accurately distinguish above- versus below-threshold physicians, we estimated misclassification rates for two-category identification systems. DATA SOURCES: Claims data for Medicare fee-for-service beneficiaries residing in Florida or New York in 2010. STUDY DESIGN: We estimated colorectal cancer, glaucoma, and diabetes quality scores for 23,085 physicians, used a beta-binomial model to estimate physician score reliabilities, and computed the proportion of physicians whose performance tier would be misclassified under three scoring systems. PRINCIPAL FINDINGS: In the three scoring systems, misclassification ranges were 8.6-25.7 percent, 6.4-22.8 percent, and 4.5-21.7 percent. True positive rate ranges were 72.9-97.0 percent, 83.4-100.0 percent, and 34.7-88.2 percent. True negative rate ranges were 68.5-91.6 percent, 10.5-92.4 percent, and 81.1-99.9 percent. Positive predictive value ranges were 70.5-91.6 percent, 77.0-97.3 percent, and 55.2-99.1 percent. CONCLUSIONS: Current methods for profiling physicians on quality may produce misleading results, as the number of eligible events is typically small. Misclassification is a policy-relevant measure of the potential impact of tiering on providers, payers, and patients. Quantifying misclassification rates should inform the construction of high-performance networks and quality improvement initiatives.
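The small-denominator problem behind these misclassification rates can be sketched with a short simulation: each physician has a true pass rate, but is tiered on an observed rate from only a handful of eligible events. The Beta parameters, event count and threshold below are illustrative assumptions, not the paper's Medicare estimates.

```python
import numpy as np

# True physician quality drawn from a Beta distribution; the observed
# score is a binomial proportion over few eligible events, so tiering
# on the observed score misclassifies physicians near the threshold.
rng = np.random.default_rng(2)
n_phys, n_events = 20000, 15          # few eligible events per physician
true_rate = rng.beta(8, 2, n_phys)    # true quality scores (mean 0.8)
observed = rng.binomial(n_events, true_rate) / n_events

threshold = 0.8                       # two-category system: above vs below
truly_above = true_rate >= threshold
tiered_above = observed >= threshold
misclassified = (truly_above != tiered_above).mean()
print(f"misclassification rate with n={n_events} events: {misclassified:.1%}")
```

Increasing `n_events` in the sketch shrinks the misclassification rate, which is the paper's central point about eligible-event counts.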


Subject(s)
Physicians, Primary Care/standards, Quality of Health Care/standards, Risk Assessment/classification, Algorithms, Fee-for-Service Plans, Florida, Humans, Insurance Claim Review
6.
J Clin Endocrinol Metab ; 101(11): 4244-4250, 2016 11.
Article in English | MEDLINE | ID: mdl-27588439

ABSTRACT

CONTEXT: Young-onset obesity is strongly associated with the early development of type 2 diabetes (T2D). Genetic risk scores (GRSs) related to T2D might help predict the early impairment of glucose homeostasis in obese youths. OBJECTIVE: Our objective was to investigate the contributions of four GRSs (associated with T2D [GRS-T2D], beta-cell function [GRS-ß], insulin resistance [GRS-IR], and body mass index) to the variation of traits derived from the oral glucose tolerance test (OGTT) in obese and normal-weight children and young adults. DESIGN: This was a cross-sectional association study. PATIENTS: A total of 1076 obese children/adolescents (age = 11.4 ± 2.8 years) and 1265 normal-weight young volunteers (age = 21.1 ± 4.4 years) of European ancestry were recruited from pediatric obesity clinics and the general population, respectively. INTERVENTION: A standard OGTT. MAIN OUTCOME MEASURES: Associations between the GRSs and OGTT-derived traits, including fasting glucose and insulin, the insulinogenic index, the insulin sensitivity index and the disposition index (DI), and associations between the GRSs and pre-diabetic conditions. RESULTS: GRS-ß was significantly associated with fasting glucose (ß = 0.019; P = 3.5 × 10⁻⁴) and DI (ß = -0.031; P = 8.9 × 10⁻⁴; last quartile 18% lower than the first) in obese children, and nominally associated with fasting glucose (ß = 0.009; P = 0.017) and DI (ß = -0.030; P = 1.1 × 10⁻³; last quartile 11% lower than the first) in normal-weight youths. GRS-T2D showed a weaker contribution to fasting glucose and DI than GRS-ß in both obese and normal-weight youths. The GRSs associated with insulin resistance and with body mass index were not associated with any trait. None of the GRSs was associated with prediabetes, which affected only 4% of participants overall.
CONCLUSION: Single nucleotide polymorphisms identified by genome-wide association studies to influence beta-cell function were associated with fasting glucose and indices of insulin secretion in youths, especially in obese children.


Subject(s)
Blood Glucose/metabolism, Diabetes Mellitus, Type 2/metabolism, Genetic Predisposition to Disease/classification, Insulin-Secreting Cells/metabolism, Insulin/metabolism, Pediatric Obesity/metabolism, Adolescent, Adult, Child, Cohort Studies, Cross-Sectional Studies, Diabetes Mellitus, Type 2/epidemiology, Diabetes Mellitus, Type 2/genetics, France/epidemiology, Genome-Wide Association Study, Glucose Tolerance Test, Humans, Insulin Secretion, Italy/epidemiology, Male, Pediatric Obesity/epidemiology, Pediatric Obesity/genetics, Polymorphism, Single Nucleotide, Risk Assessment/classification, Young Adult
7.
Work ; 51(4): 703-13, 2015.
Article in English | MEDLINE | ID: mdl-26409941

ABSTRACT

BACKGROUND: The identification of hazards or risk factors at the workplace level is crucial to risk identification, risk analysis and risk evaluation. OBJECTIVE: This article presents a taxonomy of hazards and risk factors to be applied at the workplace level during systematic hazard identification. METHODS: The taxonomy was based on evidence from the literature, including technical documents, standards, regulations, good-practice documents and toxicology databases. RESULTS: The taxonomy was organized as a matrix (the Risk Factors-Disorders Matrix), an extensive list of occupational hazards. Hazards were organized in terms of their potential dominant individual consequences: accidents (injuries), occupational diseases, and negative social, mental or physical well-being (such as dissatisfaction and discomfort complaints not resulting from injury or disease symptomatology). The specific hazards in each work context were characterized by three summary tables: (1) the Accidents-Risk Factors Table, (2) the Diseases-Risk Factors Table and (3) the Negative Well-being-Risk Factors Table. CONCLUSIONS: Risk factors are coded according to the Risk Factors-Disorders Matrix, and the dominant potential disorders are identified in the Risk Factors Tables. The inclusion of individual, psychosocial, emerging and combined hazards in the Matrix helps focus risk identification on non-traditional sources of risk during risk assessment procedures.


Subject(s)
Accidents, Occupational, Occupational Diseases, Risk Assessment/classification, Humans, Job Satisfaction, Occupational Diseases/etiology, Risk Factors, Workplace
8.
Einstein (Sao Paulo) ; 13(2): 196-201, 2015.
Article in English, Portuguese | MEDLINE | ID: mdl-26154540

ABSTRACT

OBJECTIVE: To evaluate the impact of traditional check-up appointments on the progression of cardiovascular risk over time. METHODS: This retrospective cohort study included 11,126 medical records of asymptomatic executives who were evaluated between January 2005 and October 2008. Variables included participants' demographic characteristics, smoking habit, history of cardiovascular disease, diabetes, dyslipidemia, total cholesterol, HDL, triglycerides, glucose, C-reactive protein, waist circumference, hepatic steatosis, Framingham score, metabolic syndrome, level of physical activity, stress, alcohol consumption, and body mass index. RESULTS: A total of 3,150 patients were included in the final analysis. A worsening was observed in all risk factors, except for smoking habit, the incidence of myocardial infarction or stroke, and the number of individuals classified as at medium or high risk for cardiovascular events. A decrease in stress level and alcohol consumption was also seen. CONCLUSION: The adoption of consistent health policies by companies is imperative to reduce risk factors and the future costs associated with illness and absenteeism.


Subject(s)
Cardiovascular Diseases/diagnosis, Disease Progression, Mass Screening/methods, Physical Examination/methods, Adult, Body Mass Index, Cardiovascular Diseases/prevention & control, Cholesterol/blood, Diabetes Mellitus/blood, Female, Humans, Life Style, Male, Middle Aged, Retrospective Studies, Risk Assessment/classification, Risk Assessment/methods, Risk Factors, Sex Factors, Smoking, Stress, Psychological/diagnosis, Time Factors
9.
PLoS One ; 10(6): e0129966, 2015.
Article in English | MEDLINE | ID: mdl-26047133

ABSTRACT

OBJECTIVES: The prevalence of cardiovascular disease risk factors has increased worldwide, but their prevalence and clustering among Tibetans are currently unknown. We aimed to explore the prevalence and clustering of cardiovascular disease risk factors among Tibetan adults in China. METHODS: In 2011, 1659 Tibetan adults (aged ≥ 18 years) from Changdu, China were recruited to this cross-sectional study. Questionnaires, physical examinations and laboratory testing were completed, and the prevalence of cardiovascular disease risk factors, including hypertension, diabetes, overweight/obesity, dyslipidemia, and current smoking, was determined. Associations between the clustering of cardiovascular disease risk factors and demographic characteristics and geographic altitude were assessed. RESULTS: The age-standardized prevalence of hypertension, diabetes, overweight or obesity, dyslipidemia, and current smoking was 62.4%, 6.4%, 34.3%, 42.7%, and 6.1%, respectively, and these risk factors were associated with age, gender, education level, yearly family income, altitude, occupation, and butter tea consumption (P < 0.05). Overall, the age-adjusted prevalence of clustering of ≥ 1, ≥ 2, and ≥ 3 cardiovascular disease risk factors was 79.4%, 47.1%, and 20.9%, respectively. Clustering of ≥ 2 and ≥ 3 cardiovascular disease risk factors was greater among Tibetans with higher education levels and yearly family incomes, and among those living at an altitude < 3500 m or in a township. CONCLUSIONS: The prevalence of cardiovascular disease risk factors, especially hypertension, was high in Tibetans. Moreover, clustering of cardiovascular disease risk factors was increased among those with higher socioeconomic status, among lamas, and among those living at an altitude < 3500 m.
These findings suggest that without the immediate implementation of an efficient policy to control these risk factors, cardiovascular disease will eventually become a major disease burden among Tibetans.
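The clustering tabulation above (proportions with ≥ 1, ≥ 2 and ≥ 3 risk factors) can be sketched from per-person indicator data. The prevalences below are synthetic values near the abstract's crude figures, and the independence of the factors is a simplifying assumption; the study also age-adjusted these counts.

```python
import numpy as np

# Per-person binary indicators for five risk factors, drawn independently
# (a simplification) at roughly the reported prevalences.
rng = np.random.default_rng(3)
prev = [0.62, 0.06, 0.34, 0.43, 0.06]  # hypertension, diabetes, overweight, dyslipidemia, smoking
n = 1659
factors = rng.random((n, len(prev))) < np.array(prev)

counts = factors.sum(axis=1)           # number of risk factors per person
for k in (1, 2, 3):
    print(f">= {k} risk factors: {(counts >= k).mean():.1%}")
```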


Subject(s)
Cardiovascular Diseases/epidemiology, Population Surveillance/methods, Risk Assessment/methods, Adolescent, Adult, Aged, Asian People, Cardiovascular Diseases/ethnology, Cardiovascular Diseases/etiology, Cluster Analysis, Cross-Sectional Studies, Dyslipidemias/complications, Humans, Hypertension/complications, Middle Aged, Obesity/complications, Overweight/complications, Prevalence, Risk Assessment/classification, Risk Assessment/statistics & numerical data, Risk Factors, Smoking/adverse effects, Surveys and Questionnaires, Tibet/epidemiology, Young Adult
10.
Av. diabetol ; 31(3): 102-112, mayo-jun. 2015.
Article in Spanish | IBECS | ID: ibc-140305

ABSTRACT

Cardiovascular disease (CVD) remains the leading cause of death in people with diabetes mellitus, whose cardiovascular mortality risk is 2 to 4 times that of the general population. Although practice guidelines recommend calculating CVD risk in diabetes, few models for estimating cardiovascular risk have been developed specifically for people with diabetes. The first CVD prediction models for type 2 diabetes, which added HbA1c and diabetes duration to the classical risk factors, are not contemporary and perform suboptimally in populations other than those in which they were developed. Constructing updated, population-derived and externally validated cardiovascular risk models would enable earlier, more aggressive, patient-centered preventive interventions to curb the ongoing epidemic of CVD in people with diabetes.


Subject(s)
Female, Humans, Male, Calibration/standards, Diabetes Mellitus, Type 2/blood, Diabetes Mellitus, Type 2/metabolism, Coronary Disease/congenital, Coronary Disease/metabolism, Quality of Life/psychology, Spain/ethnology, Risk Assessment, Risk Assessment/methods, Diabetes Mellitus, Type 2/genetics, Diabetes Mellitus, Type 2/pathology, Coronary Disease/complications, Coronary Disease/genetics, /standards, Quality of Life, Risk Assessment/classification, Risk Assessment/ethics
11.
Eur J Epidemiol ; 30(4): 299-304, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25724473

ABSTRACT

The Net Reclassification Improvement (NRI) has become a popular metric for evaluating improvement in disease prediction models in recent years. The concept is relatively straightforward, but its usage and interpretation have differed across studies. While no thresholds exist for evaluating the degree of improvement, many studies have relied solely on the significance of the NRI estimate. However, recent studies recommend that statistical testing with the NRI be avoided. We propose using confidence ellipses around the estimated values of the event and non-event NRIs, which might provide the best measure of variability around the point estimates. Our developments are illustrated with practical examples from the EPIC-Potsdam study.


Subject(s)
Chronic Disease/classification, Chronic Disease/epidemiology, Models, Statistical, Predictive Value of Tests, Risk Assessment/classification, Aged, Confidence Intervals, Data Interpretation, Statistical, Female, Humans, Male, Middle Aged, Risk Assessment/methods, Risk Factors
12.
Int Arch Occup Environ Health ; 88(8): 1069-75, 2015 Nov.
Article in English | MEDLINE | ID: mdl-25702173

ABSTRACT

BACKGROUND: Prognostic models including age, self-rated health and prior sickness absence (SA) have been found to predict high (≥ 30) SA days and high (≥ 3) SA episodes during 1-year follow-up. More predictors of high SA are needed to improve these SA prognostic models. The purpose of this study was to investigate fatigue as a new predictor in SA prognostic models by using risk reclassification methods and measures. METHODS: This was a prospective cohort study with 1-year follow-up of 1,137 office workers. Fatigue was measured at baseline with the 20-item Checklist Individual Strength and added to the existing SA prognostic models. SA days and episodes during 1-year follow-up were retrieved from an occupational health service register. The added value of fatigue was investigated with the Net Reclassification Index (NRI) and integrated discrimination improvement (IDI) measures. RESULTS: In total, 579 (51%) office workers had complete data for analysis. Fatigue was prospectively associated with both high SA days and high SA episodes. The NRI revealed that adding fatigue to the SA days model correctly reclassified workers with high SA days, but incorrectly reclassified workers without high SA days. The IDI indicated no improvement in risk discrimination by the SA days model. Both the NRI and IDI showed that the prognostic model predicting high SA episodes did not improve when fatigue was added as a predictor variable. CONCLUSION: In the present study, fatigue increased false-positive rates, which may reduce the cost-effectiveness of interventions for preventing SA.


Subject(s)
Fatigue/epidemiology, Occupational Diseases/epidemiology, Sick Leave/classification, Absenteeism, Adult, Checklist, Fatigue/etiology, Female, Humans, Male, Middle Aged, Models, Theoretical, Occupational Diseases/etiology, Occupational Health Services/statistics & numerical data, Prospective Studies, Risk Assessment/classification, Risk Factors, Sick Leave/statistics & numerical data
14.
Regul Toxicol Pharmacol ; 70(3): 590-604, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25239592

ABSTRACT

Recent EU legislation has introduced endocrine disrupting properties as a hazard-based "cut-off" criterion for the approval of active substances as pesticides and biocides. Currently, no specific science-based approach for the assessment of substances with endocrine disrupting properties has been agreed upon, although this new legislation provides interim criteria based on classification and labelling. Different proposals for decision making on potential endocrine disrupting properties in human health risk assessment have been developed by the German Federal Institute for Risk Assessment (BfR) and other regulatory bodies. All these frameworks, although differing with regard to hazard characterisation, include a toxicological assessment of the adversity of the effects, the evaluation of underlying modes/mechanisms of action in animals, and considerations concerning the relevance of effects to humans. Three options for regulatory decision making were tested on 39 pesticides for their applicability and to analyze their potential impact on the regulatory status of active substances currently approved for use in Europe: Option 1, based purely on hazard identification (adversity, mode of action, and the plausibility that both are related); Option 2, based on hazard identification and additional elements of hazard characterisation (severity and potency); Option 3, based on the interim criteria laid down in the recent EU pesticides legislation. Additionally, the data analysed in this study were used to address which parts of the endocrine system were affected, which studies were the most sensitive, and whether no-observed-adverse-effect levels were observed for substances with endocrine disrupting properties. The results of this exercise represent preliminary categorisations and must not be used as a basis for definitive regulatory decisions.
They demonstrate that combining criteria for hazard identification with additional criteria for hazard characterisation allows prioritising and differentiating between substances with regard to their regulatory concern. It is proposed to integrate these elements into a decision matrix to be used within a weight-of-evidence approach for the toxicological categorisation of relevant endocrine disruptors, and to consider all parts of the endocrine system in regulatory decision making on endocrine disruption.


Subject(s)
Decision Making, Endocrine Disruptors/toxicity, Pesticides/toxicity, Animals, Endocrine Disruptors/classification, European Union, Government Regulation, Humans, Pesticides/classification, Risk Assessment/classification, Risk Assessment/legislation & jurisprudence, Risk Assessment/methods
15.
Am J Hematol ; 89(8): 813-8, 2014 Aug.
Article in English | MEDLINE | ID: mdl-24782398

ABSTRACT

Approximately 30% of patients with chronic myelomonocytic leukemia (CMML) have karyotypic abnormalities, and this low frequency has made using cytogenetic data for the prognostication of CMML patients challenging. Recently, a three-tiered cytogenetic risk stratification system for CMML patients was proposed by a Spanish study group. Here we assessed the prognostic impact of cytogenetic abnormalities on overall survival (OS) and leukemia-free survival (LFS) in 417 CMML patients from our institution. Overall, the Spanish cytogenetic risk system effectively stratified patients into different risk groups, with a median OS of 33 months in the low-, 24 months in the intermediate- and 14 months in the high-risk groups. Within the proposed high-risk group, however, marked differences in OS were observed. Patients with isolated trisomy 8 showed a median OS of 22 months, similar to the intermediate-risk group (P = 0.132) but significantly better than that of other patients in the high-risk group (P = 0.018). Furthermore, patients with more than three chromosomal abnormalities showed a significantly shorter OS than patients with three abnormalities (8 vs. 15 months, P = 0.004), suggesting a possible separate risk category. If trisomy 8 were simply moved to the intermediate-risk category, the modified cytogenetic grouping would provide a better separation of OS and LFS, and its prognostic impact was independent of other risk parameters. Our study results strongly advocate for the incorporation of cytogenetic information in the risk model for CMML.


Subject(s)
Chromosome Aberrations, Leukemia, Myelomonocytic, Chronic/genetics, Trisomy, Adult, Aged, Aged, 80 and over, Chromosomes, Human, Pair 8, Female, Humans, Karyotyping, Leukemia, Myelomonocytic, Chronic/classification, Leukemia, Myelomonocytic, Chronic/mortality, Leukemia, Myelomonocytic, Chronic/pathology, Male, Middle Aged, Risk, Risk Assessment/classification, Survival Analysis
16.
Pharmacoepidemiol Drug Saf ; 23(7): 667-78, 2014 Jul.
Article in English | MEDLINE | ID: mdl-24821575

ABSTRACT

BACKGROUND: The need for formal and structured approaches for benefit-risk assessment of medicines is increasing, as is the complexity of the scientific questions addressed before making decisions on the benefit-risk balance of medicines. We systematically collected, appraised and classified available benefit-risk methodologies to facilitate and inform their future use. METHODS: A systematic review of publications identified benefit-risk assessment methodologies. Methodologies were appraised on their fundamental principles, features, graphical representations, assessability and accessibility. We created a taxonomy of methodologies to facilitate understanding and choice. RESULTS: We identified 49 methodologies, critically appraised and classified them into four categories: frameworks, metrics, estimation techniques and utility survey techniques. Eight frameworks describe qualitative steps in benefit-risk assessment and eight quantify benefit-risk balance. Nine metric indices include threshold indices to measure either benefit or risk; health indices measure quality-of-life over time; and trade-off indices integrate benefits and risks. Six estimation techniques support benefit-risk modelling and evidence synthesis. Four utility survey techniques elicit robust value preferences from relevant stakeholders to the benefit-risk decisions. CONCLUSIONS: Methodologies to help benefit-risk assessments of medicines are diverse and each is associated with different limitations and strengths. There is not a 'one-size-fits-all' method, and a combination of methods may be needed for each benefit-risk assessment. The taxonomy introduced herein may guide choice of adequate methodologies. Finally, we recommend 13 of 49 methodologies for further appraisal for use in the real-life benefit-risk assessment of medicines.


Subject(s)
Drug-Related Side Effects and Adverse Reactions/epidemiology, Models, Statistical, Risk Assessment/methods, Decision Making, Humans, Pharmaceutical Preparations/administration & dosage, Quality of Life, Risk Assessment/classification
17.
Clin Rehabil ; 28(12): 1218-24, 2014 Dec.
Article in English | MEDLINE | ID: mdl-24849795

ABSTRACT

OBJECTIVE: To evaluate the relative accuracy of the newly developed Stroke Assessment of Fall Risk (SAFR) for classifying fallers and non-fallers, compared with a health system fall risk screening tool, the Fall Harm Risk Screen. DESIGN AND SETTING: Prospective quality improvement study conducted at an inpatient stroke rehabilitation unit at a large urban university hospital. PARTICIPANTS: Patients admitted for inpatient stroke rehabilitation (N = 419) with imaging or clinical evidence of ischemic or hemorrhagic stroke, between 1 August 2009 and 31 July 2010. INTERVENTIONS: Not applicable. MAIN OUTCOME MEASURES: Sensitivity, specificity, and area under the receiver operating characteristic (ROC) curve for both scales' classifications, based on fall risk scores completed upon admission to inpatient stroke rehabilitation. RESULTS: A total of 68 (16%) participants fell at least once. The SAFR was significantly more accurate than the Fall Harm Risk Screen (p < 0.001), with an area under the curve of 0.73, a positive predictive value of 0.29, and a negative predictive value of 0.94. For the Fall Harm Risk Screen, the area under the curve was 0.56, the positive predictive value was 0.19, and the negative predictive value was 0.86. The sensitivity and specificity of the SAFR (0.78 and 0.63, respectively) were higher than those of the Fall Harm Risk Screen (0.57 and 0.48, respectively). CONCLUSIONS: An evidence-derived, population-specific fall risk assessment may predict fallers more accurately than a general fall risk screen for stroke rehabilitation patients. While the SAFR improves upon the accuracy of a general assessment tool, additional refinement may be warranted.
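The reported accuracy measures are all simple functions of a 2×2 confusion matrix. The sketch below back-calculates cell counts that approximately match the published SAFR figures (68 fallers of 419; sensitivity 0.78, specificity 0.63); the exact counts are reconstructed for illustration, not taken from the paper.

```python
def classification_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),   # fallers flagged as at risk
        "specificity": tn / (tn + fp),   # non-fallers screened out
        "ppv": tp / (tp + fp),           # flagged patients who fell
        "npv": tn / (tn + fn),           # cleared patients who did not fall
    }

# Reconstructed counts: 68 fallers and 351 non-fallers, split to roughly
# reproduce the reported SAFR sensitivity 0.78 / specificity 0.63.
m = classification_metrics(tp=53, fp=130, fn=15, tn=221)
print({k: round(v, 2) for k, v in m.items()})
# -> {'sensitivity': 0.78, 'specificity': 0.63, 'ppv': 0.29, 'npv': 0.94}
```

The low PPV despite decent sensitivity illustrates why a 16% fall rate makes positive predictions hard: most flagged patients still do not fall.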


Subject(s)
Accidental Falls/prevention & control , Risk Assessment/classification , Stroke/complications , Accidental Falls/statistics & numerical data , Age Distribution , Area Under Curve , Female , Humans , Male , Middle Aged , Nursing Assessment , Predictive Value of Tests , Prospective Studies , Quality Improvement , ROC Curve , Rehabilitation Centers , Risk Assessment/methods , Stroke/classification , Stroke Rehabilitation
18.
Ann Intern Med ; 160(2): 122-31, 2014 Jan 21.
Article in English | MEDLINE | ID: mdl-24592497

ABSTRACT

The net reclassification improvement (NRI) is an increasingly popular measure for evaluating improvements in risk predictions. This article details a review of 67 publications in high-impact general clinical journals that considered the NRI. Incomplete reporting of NRI methods, incorrect calculation, and common misinterpretations were found. To aid improved applications of the NRI, the article elaborates on several aspects of the computation and interpretation in various settings. Limitations and controversies are discussed, including the effect of miscalibration of prediction models, the use of the continuous NRI and "clinical NRI," and the relation with decision analytic measures. A systematic approach toward presenting NRI analysis is proposed: Detail and motivate the methods used for computation of the NRI, use clinically meaningful risk cutoffs for the category-based NRI, report both NRI components, address issues of calibration, and do not interpret the overall NRI as a percentage of the study population reclassified. Promising NRI findings need to be followed with decision analytic or formal cost-effectiveness evaluations.
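The recommendation to report both NRI components, and not to read the overall NRI as a percentage of the population reclassified, can be made concrete with a small sketch. This is a generic category-based NRI computation under the usual definition, not code from the review itself:

```python
def category_nri(old_cat, new_cat, event):
    """Category-based NRI, kept as separate event and non-event components.

    old_cat, new_cat: risk-category index per subject under each model
                      (higher index = higher risk category)
    event: 1 if the subject experienced the outcome, else 0
    """
    up_e = down_e = n_e = up_ne = down_ne = n_ne = 0
    for old, new, e in zip(old_cat, new_cat, event):
        if e:
            n_e += 1
            up_e += new > old      # event moved up: correct reclassification
            down_e += new < old    # event moved down: incorrect
        else:
            n_ne += 1
            up_ne += new > old     # non-event moved up: incorrect
            down_ne += new < old   # non-event moved down: correct
    nri_events = (up_e - down_e) / n_e
    nri_nonevents = (down_ne - up_ne) / n_ne
    # Report both components; the overall NRI is their sum, a sum of two
    # proportions from different groups, not a single population proportion.
    return nri_events, nri_nonevents

ev, ne = category_nri([0, 0, 1, 1], [1, 0, 0, 1], [1, 1, 0, 0])
```

In the toy call above, one of two events moves up a category and one of two non-events moves down, so each component is 0.5 and the overall NRI is 1.0, even though only half the subjects were reclassified, which illustrates why the overall NRI is not a reclassified percentage.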


Subject(s)
Models, Statistical , Risk Assessment/classification , Data Interpretation, Statistical , Decision Support Techniques , Humans , Risk Assessment/methods
20.
Stat Med ; 33(11): 1914-27, 2014 May 20.
Article in English | MEDLINE | ID: mdl-24353130

ABSTRACT

Risk prediction models play an important role in prevention and treatment of several diseases. Models that are in clinical use are often refined and improved. In many instances, the most efficient way to improve a successful model is to identify subgroups for which there is a specific biological rationale for improvement and tailor the improved model to individuals in these subgroups, an approach especially in line with personalized medicine. At present, we lack statistical tools to evaluate improvements targeted to specific subgroups. Here, we propose simple tools to fill this gap. First, we extend a recently proposed measure, the Integrated Discrimination Improvement, using a linear model with covariates representing the subgroups. Next, we develop graphical and numerical tools that compare reclassification of two models, focusing only on those subjects for whom the two models reclassify differently. We apply these approaches to BRCAPRO, a genetic risk prediction model for breast and ovarian cancer, using data from MD Anderson Cancer Center. We also conduct a simulation study to investigate properties of the new reclassification measure and compare it with currently used measures. Our results show that the proposed tools can successfully uncover subgroup specific model improvements.
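The Integrated Discrimination Improvement that the authors extend is, in its basic form, the change in discrimination slope between two models: how much further apart the mean predicted risks of events and non-events move under the new model. A minimal sketch of that base measure (the paper's subgroup-covariate extension is not reproduced here):

```python
from statistics import mean

def idi(p_old, p_new, event):
    """Integrated Discrimination Improvement between two risk models.

    p_old, p_new: predicted risks per subject under the old and new model
    event: 1 if the subject experienced the outcome, else 0
    """
    events_new = [p for p, e in zip(p_new, event) if e]
    events_old = [p for p, e in zip(p_old, event) if e]
    nonevents_new = [p for p, e in zip(p_new, event) if not e]
    nonevents_old = [p for p, e in zip(p_old, event) if not e]
    # Gain in mean predicted risk among events, minus the (undesirable)
    # gain in mean predicted risk among non-events.
    return (mean(events_new) - mean(events_old)) - (
        mean(nonevents_new) - mean(nonevents_old)
    )

delta = idi(
    p_old=[0.2, 0.3, 0.4, 0.1],
    p_new=[0.4, 0.5, 0.2, 0.1],
    event=[1, 1, 0, 0],
)
```

The paper's extension amounts to modeling this improvement with covariates that encode subgroup membership, so that an improvement concentrated in one subgroup is not averaged away over the whole cohort.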


Subject(s)
Data Interpretation, Statistical , Models, Genetic , Risk Assessment/methods , Breast Neoplasms/genetics , Computer Simulation , Female , Humans , Ovarian Neoplasms/genetics , Risk Assessment/classification